
    Implicit equations involving the p-Laplace operator

    In this work we study the existence of solutions $u \in W^{1,p}_0(\Omega)$ to the implicit elliptic problem $f(x, u, \nabla u, \Delta_p u) = 0$ in $\Omega$, where $\Omega$ is a bounded domain in $\mathbb{R}^N$, $N \ge 2$, with smooth boundary $\partial\Omega$, $1 < p < +\infty$, and $f \colon \Omega \times \mathbb{R} \times \mathbb{R}^N \times \mathbb{R} \to \mathbb{R}$. We consider the particular case in which the function $f$ can be expressed in the form $f(x, z, w, y) = \varphi(x, z, w) - \psi(y)$, where the function $\psi$ depends only on the $p$-Laplacian $\Delta_p u$. We also present some applications of our results.
    Comment: 15 pages; comments are welcome
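A note for concreteness: under the stated splitting of $f$, the implicit problem reduces to an explicit quasilinear one whenever $\psi$ is invertible. The sketch below (our notation; the paper may work under weaker hypotheses on $\psi$) records that reduction:

```latex
% If \psi : \mathbb{R} \to \mathbb{R} is a bijection, the implicit equation
%   \varphi(x, u, \nabla u) - \psi(\Delta_p u) = 0   in \Omega
% is equivalent to the explicit problem
\begin{equation*}
  \Delta_p u \;=\; \psi^{-1}\!\bigl(\varphi(x, u, \nabla u)\bigr)
  \quad \text{in } \Omega,
  \qquad u \in W^{1,p}_0(\Omega),
\end{equation*}
% to which existence theory for \Delta_p u = g(x, u, \nabla u) applies.
```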

    Theoretical thermodynamic analysis of a closed-cycle process for the conversion of heat into electrical energy by means of a distiller and an electrochemical cell

    We analyse a device aimed at the conversion of heat into electrical energy, based on a closed cycle in which a distiller generates two solutions at different concentrations, and an electrochemical cell consumes the concentration difference, converting it into electrical current. We first study an ideal model of such a process. We show that, if the device works at a single fixed pressure (i.e. with a "single effect"), then the efficiency of the conversion of heat into electrical power can approach the efficiency of a reversible Carnot engine operating between the boiling temperature of the concentrated solution and that of the pure solvent. When two heat reservoirs with a larger temperature difference are available, the overall efficiency can be increased by employing an arrangement of multiple cells working at different pressures ("multiple effects"). We find that a given efficiency can be achieved with a reduced number of effects by using solutions with a high boiling point elevation.
    Comment: The following article has been submitted to Journal of Renewable and Sustainable Energy. After it is published, it will be found at http://scitation.aip.org/content/aip/journal/jrs
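For concreteness, the single-effect bound quoted above can be written out explicitly (the notation below is ours, not the paper's):

```latex
% T_b        = boiling temperature of the concentrated solution (hot side),
% T_0        = boiling temperature of the pure solvent (cold side),
% \Delta T_b = T_b - T_0, the boiling point elevation of the solution.
\begin{equation*}
  \eta_{\text{single effect}} \;\longrightarrow\; \eta_{\text{Carnot}}
  \;=\; 1 - \frac{T_0}{T_b}
  \;=\; \frac{\Delta T_b}{T_b}.
\end{equation*}
```

Written this way, the closing remark is immediate: a solution with a large boiling point elevation lets each effect span a wider temperature interval, so fewer effects are needed to cover a given reservoir temperature difference.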

    Entity-Linking via Graph-Distance Minimization

    Entity-linking is a natural-language-processing task that consists in identifying the entities mentioned in a piece of text and linking each to an appropriate item in some knowledge base; when the knowledge base is Wikipedia, the problem is known as wikification (in this case, items are Wikipedia articles). An instance of entity-linking can be formalized as an optimization problem on the underlying concept graph, where the quantity to be optimized is the average distance between chosen items. Inspired by this application, we define a new graph problem which is a natural variant of the Maximum Capacity Representative Set. We prove that our problem is NP-hard for general graphs; nonetheless, under some restrictive assumptions, it turns out to be solvable in linear time. For the general case, we propose two heuristics: one tries to enforce the above assumptions and the other is based on the notion of hitting distance; we show experimentally how these approaches perform with respect to some baselines on a real-world dataset.
    Comment: In Proceedings GRAPHITE 2014, arXiv:1407.7671. The second and third authors were supported by the EU-FET grant NADINE (GA 288956).
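The optimization problem at the core of this abstract is easy to state executably. Below is a brute-force sketch of the objective only (the candidate sets, toy graph, and function name are our illustrative choices; the paper's contribution is heuristics that avoid exactly this exponential enumeration):

```python
# Brute-force sketch: choose one knowledge-base item per mention so that
# the average pairwise graph distance between chosen items is minimized.
# Exponential in the number of mentions -- purely illustrative.
import itertools
import networkx as nx

def link_entities(graph, candidates):
    """candidates[i] lists the items that mention i could refer to."""
    best, best_cost = None, float("inf")
    for choice in itertools.product(*candidates):
        pairs = list(itertools.combinations(choice, 2))
        try:
            cost = sum(nx.shortest_path_length(graph, u, v)
                       for u, v in pairs) / max(len(pairs), 1)
        except nx.NetworkXNoPath:
            continue  # skip assignments with disconnected items
        if cost < best_cost:
            best, best_cost = choice, cost
    return best

# Toy concept graph: "jaguar" is ambiguous between a car and an animal.
G = nx.Graph([("jaguar_car", "car"), ("jaguar_cat", "cat"),
              ("car", "speed"), ("speed", "race"), ("race", "car")])
print(link_entities(G, [["jaguar_car", "jaguar_cat"], ["race", "speed"]]))
# -> ('jaguar_car', 'race'): the coherent (car-related) reading wins
```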

    Synchronous Context-Free Grammars and Optimal Linear Parsing Strategies

    Synchronous Context-Free Grammars (SCFGs), also known as syntax-directed translation schemata, differ from context-free grammars in that they do not have a binary normal form. In general, parsing with SCFGs takes space and time polynomial in the length of the input strings, but with the degree of the polynomial depending on the permutations of the SCFG rules. We consider linear parsing strategies, which add one nonterminal at a time. We show that, for a given input permutation, the problems of finding the linear parsing strategy with minimum space complexity and with minimum time complexity are both NP-hard.
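To make the notion of a linear strategy concrete, the sketch below evaluates one strategy on one permutation. It uses a simplified cost measure, the maximum number of contiguous spans the partial analysis covers on the source and target sides, which is a common proxy in this literature for the complexity exponent (the measure, the permutation, and the function names are our illustrative choices, not the paper's exact definitions):

```python
# A linear parsing strategy adds the nonterminals of an SCFG rule one at a
# time. The partial item must remember which positions it already covers,
# as contiguous spans on the source side and, through the rule's
# permutation, on the target side; more simultaneous spans means a higher
# complexity exponent. Simplified sketch of that bookkeeping:

def num_spans(positions):
    """Number of maximal runs of consecutive integers in a set."""
    s = sorted(positions)
    return sum(1 for i, p in enumerate(s) if i == 0 or p != s[i - 1] + 1)

def strategy_cost(perm, order):
    """perm[i] = target position of source nonterminal i;
    order = the sequence in which source positions are added."""
    cost, added = 0, set()
    for pos in order:
        added.add(pos)
        cost = max(cost,
                   num_spans(added) + num_spans({perm[j] for j in added}))
    return cost

perm = [2, 0, 3, 1]                       # the non-binarizable "2413" pattern
print(strategy_cost(perm, [0, 1, 2, 3]))  # -> 3: spans pile up on one side
```

Minimizing this kind of maximum over all addition orders is the optimization the abstract proves NP-hard (under the paper's exact cost measures rather than this simplification).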

    A deep learning integrated Lee-Carter model

    In the field of mortality, the Lee–Carter based approach can be considered the milestone for forecasting mortality rates among stochastic models. We could define a "Lee–Carter model family" that embraces all developments of this model, including its first formulation (1992), which remains the benchmark for comparing the performance of newer models. In the Lee–Carter model, the kt parameter, describing the mortality trend over time, plays an important role in determining future mortality behavior. The ARIMA process traditionally used to model kt shows evident limitations in describing the future mortality shape. In the forecasting phase, a more plausible approach is needed to capture a nonlinear shape in the projected mortality rates. We therefore propose an alternative to the ARIMA processes based on a deep learning technique. More precisely, in order to capture the pattern of the kt series over time more accurately, we apply a Recurrent Neural Network with a Long Short-Term Memory architecture and integrate it with the Lee–Carter model to improve its predictive capacity. The proposed approach provides significant performance in terms of predictive accuracy and also allows one to avoid the a priori selection of time chunks. Indeed, it is common practice among academics to delete the periods in which the noise is overwhelming or the data quality is insufficient. The strength of the Long Short-Term Memory network lies in its ability to treat this noise and adequately reproduce it in the forecasted trend, thanks to an architecture that takes significant long-term patterns into account.
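A minimal sketch of the deep learning step, assuming a fitted kt series is already at hand (the synthetic series, window length, and layer sizes below are our illustrative choices, not the paper's calibration):

```python
# Minimal sketch: fit an LSTM to a (synthetic) Lee-Carter kt series and
# forecast one step ahead, replacing the ARIMA step of the classical model.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
kt = np.cumsum(-0.5 + 0.3 * rng.standard_normal(80))  # stand-in for fitted kt

window = 10  # look-back length (illustrative choice)
X = np.stack([kt[i:i + window] for i in range(len(kt) - window)])[..., None]
y = kt[window:]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(32),   # Long Short-Term Memory layer
    tf.keras.layers.Dense(1),   # predicts the next kt value
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=50, verbose=0)

next_kt = model.predict(kt[-window:].reshape(1, window, 1), verbose=0)
print(float(next_kt[0, 0]))  # one-step-ahead forecast of kt
```

Feeding each one-step forecast back into the input window yields the nonlinear multi-step trajectory that the abstract contrasts with the essentially linear extrapolation of an ARIMA process.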